Google’s AI chatbot Bard inaccurately reported the death toll during the conflict, as per the report. (Bloomberg)

Google and Bing Misinform Public with False Reports of Ceasefire in Israel

Artificial intelligence (AI) chatbots have surged in popularity worldwide since OpenAI introduced ChatGPT in November 2022. The technology lets users draw on a vast amount of information and tailor it to their needs, and even Google Search now lets users enter a query and receive an instant answer from an AI chatbot. However, the information these chatbots provide is not always reliable. Recently, Google Bard and Microsoft Bing Chat, two widely used AI chatbots, have faced allegations of spreading inaccurate information about the Israel-Hamas conflict.

Let’s dive deep into it.

AI chatbots report false information

According to a Bloomberg report, Google’s Bard and Microsoft’s AI-powered Bing Search were asked basic questions about the ongoing conflict between Israel and Hamas, and both chatbots inaccurately claimed that a ceasefire was in place. Bloomberg’s Shirin Ghaffary reported in a newsletter: “Google’s Bard told me on Monday that ‘both sides are committed’ to keeping the peace. Microsoft’s AI-powered Bing Chat similarly wrote on Tuesday that a ‘ceasefire means an immediate end to bloodshed.’”

Google Bard also misstated the death toll. Asked about the conflict on October 9, it said the death toll had exceeded “1,300” on October 11, a date that had not yet arrived.

What causes these errors?

While the exact cause of these inaccurate reports is unknown, AI chatbots are known to misrepresent facts from time to time, a problem called AI hallucination. For the uninitiated, an AI hallucination occurs when a large language model (LLM) makes up facts and presents them as absolute truth. This is not the first time an AI chatbot has invented facts: in June, there was talk of suing OpenAI for defamation after ChatGPT falsely accused a man of a crime.
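
To see why this happens, it helps to remember that an LLM is essentially a next-token sampler: it picks continuations by how probable they look given patterns in its training data, with no built-in step that checks a claim against reality. The toy Python sketch below (all phrases and probabilities are hypothetical, purely for illustration) shows how a fluent but false continuation can simply out-rank a true one.

import random

# Toy stand-in for an LLM's next-phrase distribution (hypothetical numbers).
# The model scores continuations by how plausible they sound given its
# training patterns, not by whether they are factually true right now.
continuations = {
    "a ceasefire is currently in place": 0.45,   # fluent, pattern-matched, false
    "fighting is ongoing as of today": 0.35,     # true, but not guaranteed to win
    "casualty figures cannot be verified yet": 0.20,
}

def sample_continuation(dist):
    """Sample one continuation in proportion to its probability,
    exactly as a decoder samples tokens -- no fact-checking involved."""
    phrases = list(dist)
    weights = list(dist.values())
    return random.choices(phrases, weights=weights, k=1)[0]

print("Regarding the current conflict, " + sample_continuation(continuations))
# Roughly 45% of runs confidently assert a ceasefire that never happened:
# a hallucination delivered with the same fluency as a correct answer.

The point of the sketch is that nothing in the sampling loop distinguishes a true sentence from a false one; correctness emerges only when the training patterns happen to align with reality.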

This problem has existed for quite some time, and even the people behind AI chatbots are aware of it. Speaking at an event at IIIT Delhi in June, OpenAI co-founder and CEO Sam Altman said: “It will take us about a year to improve the model. It’s a balance between creativity and accuracy, and we’re trying to minimize the problem. (Right now) I trust the answers that come from ChatGPT the least of anybody on earth.”

At a time when there is so much misinformation in the world, inaccurate news reporting by AI chatbots raises serious questions about the technology’s reliability.
